Detecting Chickens From Social Media Data
Brad Cooley, Arpan Ojha, School of Informatics, Computing, and Engineering, Indiana University
Myrna Cadena, School of Veterinary Medicine, University of California, Davis
| Method | Top-1 Accuracy (%) | Train Accuracy (%) | Time to train/detect (s) |
|---|---|---|---|
| *Neural networks* | | | |
| CNN (baseline) | 75 | 95.8 | 240/60 |
| VGG-19 | 87 | 89 | 28800/720 |
| ResNet-50 | 98 | 99 | 3600/300 |
| ConvNeXt | 94 | 99 | 60/30 |
| *Transformers* | | | |
| ViT | 35 | 45 | 600/5 |
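The top-1 figures above (and the top-5 figure quoted for ConvNeXt in the experiments) can be computed directly from model logits; a minimal NumPy sketch, where the sample logits and class indices are made-up illustrations rather than real model outputs:

```python
import numpy as np

def top_k_accuracy(logits, labels, k=1):
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    # indices of the k largest logits per row (order within the top k does not matter)
    topk = np.argsort(logits, axis=1)[:, -k:]
    hits = (topk == labels[:, None]).any(axis=1)
    return hits.mean()

# toy example: 4 samples, 3 classes (scores are fabricated)
logits = np.array([[2.0, 0.5, 0.10],   # predicted class 0
                   [0.2, 1.5, 0.90],   # predicted class 1
                   [0.3, 0.4, 0.35],   # predicted class 1
                   [1.0, 0.1, 3.00]])  # predicted class 2
labels = np.array([0, 1, 2, 2])

top1 = top_k_accuracy(logits, labels, k=1)  # 3 of 4 correct -> 0.75
top2 = top_k_accuracy(logits, labels, k=2)  # every true label is in the top 2 -> 1.0
```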
ConvNeXt achieves SOTA performance on farm vs. game fowl. A green border indicates a
correct classification; a red border indicates an incorrect classification.
Train/test images were obtained from Kaggle, Reddit, and Google.
1. Motivation
2. Challenges
Using ConvNeXt and transfer learning, our implementation achieves
breed-specific labeling and, when breed labels are not available,
differentiation between game fowl and domestic fowl.
3. Initial Modeling of Chicken Body Shape
4. Experiments
We experimented with four different models:
- Baseline CNN: very poor classification once other animals were introduced (10-15%)
- ResNet-50: decent classification, but not accurate enough to be breed-specific
- ConvNeXt: good breed-specific classification and excellent species classification (90% top-5)
- ViT: mediocre species classification (~35%)
5. Results
Above: model complexity. All models use Adam and cross-entropy loss.
Right: comparison of the models in terms of top-1 test accuracy,
training accuracy, and time to detect.
At times our model's classifications were not even remotely correct;
for example, the middle image was classified as a lapdog.
6. Future Work
- Refine ConvNeXt labeling techniques
- Apply ConvNeXt to video rather than still images
- Broaden data collection strategies and sources
- Try DeiT and other modern transformers
Visual depiction of game versus farm fowl.
Given the two broad categories of chicken, game and farm, we:
- Mine social media websites for chicken data
- Jointly estimate the type of chicken, game or farm, to support disease protection and isolation
- Need to identify contours and feather data properly
- Deeper layers perform better when more expressive details are documented
- The model needs to be as discriminative as possible, so we train against similar-looking animals (e.g., squirrels)
- Handle different species, colors, and orientations of chickens, including feather texture
- Solve limited-data issues with stitching, translation, and warping
- The model needs to be of the "chickens versus the world" type, where chickens are classified against other animals and observable items in the world
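The limited-data fixes named above (stitching, translation, warping) each reduce to simple array operations; a rough NumPy sketch over random stand-in images, where the shift amounts and the horizontal flip standing in for "warping" are illustrative choices, not the exact augmentation pipeline used:

```python
import numpy as np

rng = np.random.default_rng(0)
img_a = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in photo A
img_b = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in photo B

# translation: shift the image a few pixels (np.roll wraps pixels around the border)
shifted = np.roll(img_a, shift=(5, -3), axis=(0, 1))

# a cheap warp stand-in: horizontal flip (a real warp would remap pixel coordinates)
flipped = img_a[:, ::-1, :]

# stitching: place two images side by side to synthesize a new training sample
stitched = np.concatenate([img_a, img_b], axis=1)
```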
- Out-of-the-box computer vision models (ResNet-10, ResNet-50, etc.) tend to have a hard time classifying different breeds of the same species
- This problem becomes increasingly pronounced with data scraped from various social media sources (Twitter, Craigslist, etc.)
- We propose a way to use state-of-the-art (SOTA) computer vision models to accurately label different breeds of the same species (in our case, chickens)
- We utilize ConvNeXt and transfer learning to achieve breed-specific classification where possible
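Transfer learning here means keeping the pretrained ConvNeXt backbone frozen and fitting only a new classification head on its output features. The idea can be sketched framework-free: below, the "features" are random stand-ins for frozen-backbone embeddings (the dimensions, learning rate, and step count are all hypothetical), and a softmax head is trained with plain gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)

# stand-ins for frozen ConvNeXt embeddings: 200 samples, 32-dim features, 3 breeds
n, d, classes = 200, 32, 3
centers = rng.normal(size=(classes, d))          # one feature cluster per breed
labels = rng.integers(0, classes, size=n)
feats = centers[labels] + 0.3 * rng.normal(size=(n, d))

# new classification head: a single softmax layer trained on the fixed features
W = np.zeros((d, classes))
b = np.zeros(classes)
onehot = np.eye(classes)[labels]
for _ in range(300):                              # plain gradient descent
    logits = feats @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)             # softmax probabilities
    grad = (p - onehot) / n                       # cross-entropy gradient
    W -= 0.5 * feats.T @ grad
    b -= 0.5 * grad.sum(axis=0)

train_acc = ((feats @ W + b).argmax(axis=1) == labels).mean()
```

Only `W` and `b` are updated; the backbone that produced the features never changes, which is why fine-tuning the head is fast even when the full model has hundreds of millions of parameters.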
Acknowledgements: This project is in collaboration with the CE Poultry Lab at the University of California Davis.
| Method | Layers/Epochs | Parameters (M) | Optimizer / Loss Function |
|---|---|---|---|
| CNN (baseline) | 3/25 | 0.001 | Adam / binary cross-entropy |
| VGG-19 | 19/20 | 138 | Adam / cross-entropy |
| ResNet-50 | 50/9 | 23 | Adam / cross-entropy |
| ConvNeXt | -/2 (90 actual) | 350 | AdamW / cross-entropy |
| ViT | 12/20 | 87 | Adam / cross-entropy |
Based on these results, ConvNeXt is best suited for our purpose: it has the lowest
false-positive and false-negative rates. However, test detection time needs to be reduced.
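The false-positive/false-negative comparison in the conclusion reduces to a confusion-matrix count; a minimal sketch for the binary farm-vs-game case, where the example labels and predictions are fabricated for illustration:

```python
# binary case: 1 = game fowl (positive), 0 = farm fowl (negative)
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]  # made-up model outputs

fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # farm called game
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # game called farm
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
```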